The security of artificial intelligence (AI) is an important research area for building safe, reliable, and trustworthy AI systems. To accelerate research on AI security, the Artificial Intelligence Security Competition (AISC) was organized by Zhongguancun Laboratory, the China Industrial Control Systems Cyber Emergency Response Team, the Institute for Artificial Intelligence at Tsinghua University, and RealAI as part of the Zhongguancun International Frontier Technology Innovation Competition (https://www.zgc-aisc.com/en). The competition consists of three tracks: the Deepfake Security Competition, the Autonomous Driving Security Competition, and the Face Recognition Security Competition. This report introduces the rules of these three tracks and the solutions of the top-ranking teams in each track.
As a core part of autonomous driving systems, motion planning has received extensive attention from both academia and industry. However, there is still no trajectory planning solution capable of efficient spatio-temporal joint optimization under nonholonomic dynamics, especially in the presence of unstructured environments and dynamic obstacles. To bridge this gap, we propose a versatile and real-time trajectory optimization method that can generate high-quality feasible trajectories using the full vehicle model under arbitrary constraints. By exploiting the differential flatness property of car-like robots, we analytically formulate all feasibility constraints in terms of the flat outputs, which simplifies the trajectory planning problem. Moreover, obstacle avoidance is achieved with full-dimensional polygons, yielding less conservative trajectories with safety guarantees, especially in tightly constrained spaces. We present comprehensive benchmarks against state-of-the-art methods, demonstrating the advantages of the proposed method in terms of efficiency and trajectory quality. Real-world experiments verify the practicality of our algorithm. We will release our code as an open-source package for the reference of the research community.
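To make the flatness argument concrete, the sketch below shows the standard differential-flatness map for a kinematic bicycle model: the full state and inputs are recovered from the flat output (the rear-axle position) and its time derivatives. This is a minimal illustration of the property the paper exploits, not the paper's full-vehicle formulation; the function name and the forward-motion assumption are ours.

```python
import numpy as np

def flat_to_state(sigma_d1, sigma_d2, wheelbase):
    """Recover the bicycle-model state from the flat output sigma = (x, y)
    of the rear axle and its derivatives.

    sigma_d1, sigma_d2: first and second time derivatives (dx, dy), (ddx, ddy).
    Assumes forward motion (v > 0); the singularity at v = 0 must be
    handled separately in a real planner.
    """
    dx, dy = sigma_d1
    ddx, ddy = sigma_d2
    v = np.hypot(dx, dy)                  # longitudinal speed
    theta = np.arctan2(dy, dx)            # heading
    a = (dx * ddx + dy * ddy) / v         # longitudinal acceleration
    kappa = (dx * ddy - dy * ddx) / v**3  # path curvature
    phi = np.arctan(wheelbase * kappa)    # steering angle (tan(phi) = L * kappa)
    return theta, v, a, kappa, phi
```

Because every state and input is an algebraic function of the flat output and its derivatives, feasibility constraints (speed, acceleration, steering limits) can be imposed directly on the flat trajectory, which is exactly what makes the optimization tractable.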
Detecting fraudulent transactions is an essential component of risk control in e-commerce marketplaces. In addition to the rule-based and machine learning filters already deployed in production, we want to enable efficient real-time inference with graph neural networks (GNNs), which are useful for capturing multi-hop risk propagation in a transaction graph. However, two challenges arise when deploying GNNs in production. First, future information in a dynamic graph should not be considered in message passing when predicting the past. Second, the latency of graph queries and GNN model inference is usually up to hundreds of milliseconds, which is too costly for some critical online services. To tackle these challenges, we propose a Batch and Real-time Inception GrapH Topology (BRIGHT) framework for end-to-end GNN learning that allows efficient online real-time inference. The BRIGHT framework consists of a graph transformation module (Two-Stage Directed Graph) and a corresponding GNN architecture (Lambda Neural Network). The Two-Stage Directed Graph guarantees that the information passed from neighbors comes only from historical payment transactions; it consists of two subgraphs representing historical relationships and real-time links, respectively. The Lambda Neural Network decouples inference into two stages: batch inference of entity embeddings and real-time inference for transaction prediction. Our experiments show that BRIGHT outperforms the baseline models by more than 2% in average precision. Furthermore, BRIGHT is computationally efficient for real-time fraud detection: regarding end-to-end performance (including neighbor query and inference), BRIGHT reduces P99 latency by more than 75%, and for the inference stage our speedup is 7.8x on average compared to traditional GNNs.
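A minimal sketch of the batch/real-time split attributed to the Lambda Neural Network follows: an expensive encoder runs offline over historical entities and its output is cached, while the online path is only a small head over the cached embedding plus the incoming transaction's features. Layer choices, names, and sizes are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class LambdaStyleNet(nn.Module):
    """Illustrative two-stage split: heavy batch stage, light real-time stage."""
    def __init__(self, ent_dim, txn_dim, hid=128):
        super().__init__()
        # Stand-in for a GNN over the historical subgraph (assumption).
        self.entity_enc = nn.Linear(ent_dim, hid)
        self.txn_head = nn.Sequential(
            nn.Linear(hid + txn_dim, hid), nn.ReLU(), nn.Linear(hid, 1))

    def batch_stage(self, entity_feats):
        # Run offline over historical transactions; cache the embeddings.
        return self.entity_enc(entity_feats)

    def realtime_stage(self, cached_emb, txn_feats):
        # Online: a cheap cache lookup plus one small forward pass.
        x = torch.cat([cached_emb, txn_feats], dim=-1)
        return torch.sigmoid(self.txn_head(x))
```

The latency win comes from moving all multi-hop graph computation into the batch stage, so the real-time path never issues a graph query.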
Interacting particle systems play a key role in science and engineering. Access to the governing particle interaction law is fundamental to a complete understanding of such systems. However, the inherent complexity of these systems keeps the particle interactions hidden in many cases. Machine learning methods have the potential to learn the behavior of interacting particle systems by combining experiments with data analysis. However, most existing algorithms focus on learning dynamics at the particle level. Learning pairwise interactions, e.g., pairwise forces or pairwise potential energies, remains an open challenge. Here, we propose an algorithm that adapts the graph network framework, containing an edge part to learn the pairwise interaction and a node part to model the dynamics at the particle level. Unlike existing approaches that use neural networks in both parts, we design a deterministic operator in the node part, which allows pairwise interactions consistent with the underlying physical laws to be inferred precisely by training only to predict particle accelerations. We test the proposed methodology on multiple datasets and demonstrate that it achieves superior performance in correctly inferring pairwise interactions while remaining consistent with the underlying physics on all datasets. The proposed framework is scalable to larger systems and transferable to any type of particle interaction. The developed methodology can support a better understanding and discovery of underlying particle interaction laws and thereby guide the design of materials with targeted properties.
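The key design, as described, pairs a learned edge part with a deterministic node part. The sketch below shows one plausible instantiation: an edge MLP predicts a pairwise force, and the node part fixes aggregation to Newton's second law (sum of incoming forces divided by mass), so training on accelerations alone constrains the edge outputs to behave like physical forces. All names and dimensions are assumptions.

```python
import torch
import torch.nn as nn

class PairwiseForceNet(nn.Module):
    """Learned edge part + deterministic node part (illustrative, 2-D)."""
    def __init__(self, feat_dim, hid=64):
        super().__init__()
        self.edge_mlp = nn.Sequential(
            nn.Linear(2 * feat_dim, hid), nn.ReLU(), nn.Linear(hid, 2))

    def forward(self, x, mass, senders, receivers):
        # Edge part: force exerted by each sender on its receiver.
        f_ij = self.edge_mlp(torch.cat([x[senders], x[receivers]], dim=-1))
        # Node part (deterministic): a_i = (sum_j f_ij) / m_i.
        f_net = torch.zeros(x.size(0), 2).index_add_(0, receivers, f_ij)
        return f_net / mass.unsqueeze(-1)   # predicted accelerations
```

After training against observed accelerations, `f_ij` can be read out directly as the inferred pairwise interaction, which is the interpretability the abstract emphasizes.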
At online retail platforms, actively detecting transaction risks is crucial for improving the customer experience and minimizing financial loss. In this work, we propose xFraud, an explainable fraud transaction prediction framework that is mainly composed of a detector and an explainer. The xFraud detector can effectively and efficiently predict the legitimacy of incoming transactions. Specifically, it utilizes a heterogeneous graph neural network to learn expressive representations from the informative, heterogeneously typed entities in transaction logs. The explainer in xFraud can generate meaningful, human-understandable explanations from graphs to facilitate further processing in the business unit. In our experiments with xFraud on real transaction networks with up to 1.1 billion nodes and 3.7 billion edges, xFraud outperforms various baseline models on many evaluation metrics while remaining scalable in distributed settings. In addition, we show through both quantitative and qualitative evaluations that the xFraud explainer can generate reasonable explanations that significantly assist business analysis.
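As one illustration of the heterogeneous message passing such a detector relies on, the sketch below gives a single layer that projects messages with a separate weight per edge type (e.g. transaction-to-buyer, transaction-to-device) before aggregation. It is a generic relational-GNN layer under our own naming, not xFraud's exact architecture.

```python
import torch
import torch.nn as nn

class HeteroLayer(nn.Module):
    """One relation-aware message-passing layer (illustrative)."""
    def __init__(self, dim, edge_types):
        super().__init__()
        # A separate projection per edge type captures type semantics.
        self.proj = nn.ModuleDict({t: nn.Linear(dim, dim) for t in edge_types})

    def forward(self, h, edges):
        # h: (num_nodes, dim); edges: {etype: (src_idx, dst_idx)} LongTensors.
        out = torch.zeros_like(h)
        for etype, (src, dst) in edges.items():
            out.index_add_(0, dst, self.proj[etype](h[src]))
        return torch.relu(out + h)   # residual keeps each node's own state
```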
Many interesting problems in machine learning are being revisited with new deep learning tools. For graph-based semi-supervised learning, a recent important development is graph convolutional networks (GCNs), which nicely integrate local vertex features and graph topology in the convolutional layers. Although the GCN model compares favorably with other state-of-the-art methods, its mechanisms are not clear and it still requires a considerable amount of labeled data for validation and model selection. In this paper, we develop deeper insights into the GCN model and address its fundamental limits. First, we show that the graph convolution of the GCN model is actually a special form of Laplacian smoothing, which is the key reason why GCNs work, but it also brings potential concerns of oversmoothing with many convolutional layers. Second, to overcome the limits of the GCN model with shallow architectures, we propose both co-training and self-training approaches to train GCNs. Our approaches significantly improve GCNs in learning with very few labels, and exempt them from requiring additional labels for validation. Extensive experiments on benchmarks have verified our theory and proposals.
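The Laplacian-smoothing view can be stated compactly: one GCN propagation step multiplies the features by the symmetrically normalized adjacency with self-loops, which equals I - L_sym (the normalized Laplacian of the self-loop graph) applied to the features. The numpy sketch below is our own minimal rendering of this observation; applying it repeatedly pulls connected nodes' features together, which is the oversmoothing the paper warns about.

```python
import numpy as np

def gcn_smooth(A, H):
    """One GCN propagation step: H' = D_hat^-1/2 (A + I) D_hat^-1/2 H.

    Since D_hat^-1/2 A_hat D_hat^-1/2 = I - D_hat^-1/2 L_hat D_hat^-1/2,
    this is a special form of Laplacian smoothing; stacking many such
    steps drives the features of connected nodes toward each other.
    """
    A_hat = A + np.eye(A.shape[0])   # adjacency with self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(d ** -0.5)
    return D_inv_sqrt @ A_hat @ D_inv_sqrt @ H
```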
Masked image modeling (MIM) performs strongly in pre-training large vision Transformers (ViTs). However, small models that are critical for real-world applications cannot or only marginally benefit from this pre-training approach. In this paper, we explore distillation techniques to transfer the success of large MIM-based pre-trained models to smaller ones. We systematically study different options in the distillation framework, including distilling targets, losses, input, network regularization, sequential distillation, etc., revealing that: 1) Distilling token relations is more effective than CLS token- and feature-based distillation; 2) An intermediate layer of the teacher network as the target performs better than the last layer when the depth of the student mismatches that of the teacher; 3) Weak regularization is preferred; etc. With these findings, we achieve significant fine-tuning accuracy improvements over from-scratch MIM pre-training on ImageNet-1K classification, using the ViT-Tiny, ViT-Small, and ViT-Base models, with +4.2%/+2.4%/+1.4% gains, respectively. Our TinyMIM model of base size achieves 52.2 mIoU in ADE20K semantic segmentation, which is +4.1 mIoU higher than the MAE baseline. Our TinyMIM model of tiny size achieves 79.6% top-1 accuracy on ImageNet-1K image classification, which sets a new record for small vision models of the same size and computation budget. This strong performance suggests an alternative way of developing small vision Transformer models, that is, by exploring better training methods rather than introducing inductive biases into architectures as in most previous works. Code is available at https://github.com/OliverRensu/TinyMIM.
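Finding 1) suggests matching relation maps rather than raw features. The sketch below is one hedged rendering of token-relation distillation: a row-wise KL divergence between the teacher's and student's query-key relation maps, with the teacher's taken from an intermediate layer. Variable names and the temperature are our assumptions, not TinyMIM's exact loss set.

```python
import torch
import torch.nn.functional as F

def relation_distill_loss(q_s, k_s, q_t, k_t, tau=1.0):
    """Match student Q-K token relations to the teacher's (illustrative).

    q_s, k_s: student queries/keys; q_t, k_t: teacher queries/keys,
    each of shape (batch, tokens, dim). Returns a scalar loss.
    """
    d = q_t.size(-1)
    rel_t = F.softmax(q_t @ k_t.transpose(-2, -1) / (tau * d ** 0.5), dim=-1)
    rel_s = F.log_softmax(q_s @ k_s.transpose(-2, -1) / (tau * d ** 0.5), dim=-1)
    # KL over each token's relation distribution, averaged over the batch.
    return F.kl_div(rel_s, rel_t, reduction="batchmean")
```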
Few Shot Instance Segmentation (FSIS) requires models to detect and segment novel classes from only a few support examples. In this work, we explore a simple yet unified solution for FSIS as well as its incremental variants, and introduce a new framework named Reference Twice (RefT) to fully explore the relationship between support and query features based on a Transformer-like framework. Our key insights are twofold: first, with the aid of support masks, we can generate dynamic class centers more appropriately to re-weight query features. Second, we find that support object queries have already encoded key factors after base training. In this way, the query features can be enhanced twice, at the feature level and at the instance level. In particular, we first design a mask-based dynamic weighting module to enhance support features, and then propose to link object queries for better calibration via cross-attention. After the above steps, performance on novel classes improves significantly over our strong baseline. Additionally, our new framework can be easily extended to incremental FSIS with minor modification. When benchmarking on the COCO dataset under the FSIS, gFSIS, and iFSIS settings, our method achieves competitive performance compared to existing approaches across different shots; e.g., we boost nAP by a noticeable +8.2/+9.4 over the current state-of-the-art FSIS method in the 10/30-shot settings. We further demonstrate the superiority of our approach on Few Shot Object Detection. Code and models will be available.
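A minimal sketch of the first ingredient, mask-based dynamic class centers, follows: support features are average-pooled inside the ground-truth masks, and the resulting centers modulate the query features channel-wise. The modulation choice here (a sigmoid gate over the mean center) is our illustrative assumption, not the exact RefT module.

```python
import torch

def dynamic_class_centers(support_feats, support_masks):
    """Masked average pooling of support features (illustrative).

    support_feats: (N, C, H, W); support_masks: (N, 1, H, W) in {0, 1}.
    Returns per-example class centers of shape (N, C).
    """
    area = support_masks.sum(dim=(2, 3)).clamp(min=1.0)               # (N, 1)
    return (support_feats * support_masks).sum(dim=(2, 3)) / area     # (N, C)

def reweight_query(query_feats, centers):
    # query_feats: (C, H, W). One simple channel-wise modulation scheme
    # (an assumption): gate channels by the averaged class center.
    w = torch.sigmoid(centers.mean(dim=0))                            # (C,)
    return query_feats * w.view(-1, 1, 1)
```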
We present Muse, a text-to-image Transformer model that achieves state-of-the-art image generation performance while being significantly more efficient than diffusion or autoregressive models. Muse is trained on a masked modeling task in discrete token space: given the text embedding extracted from a pre-trained large language model (LLM), Muse is trained to predict randomly masked image tokens. Compared to pixel-space diffusion models, such as Imagen and DALL-E 2, Muse is significantly more efficient due to the use of discrete tokens and requiring fewer sampling iterations; compared to autoregressive models, such as Parti, Muse is more efficient due to the use of parallel decoding. The use of a pre-trained LLM enables fine-grained language understanding, translating to high-fidelity image generation and the understanding of visual concepts such as objects, their spatial relationships, pose, cardinality, etc. Our 900M parameter model achieves a new SOTA on CC3M, with an FID score of 6.06. The Muse 3B parameter model achieves an FID of 7.88 on zero-shot COCO evaluation, along with a CLIP score of 0.32. Muse also directly enables a number of image editing applications without the need to fine-tune or invert the model: inpainting, outpainting, and mask-free editing. More results are available at https://muse-model.github.io.
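The parallel-decoding mechanism can be sketched as confidence-based iterative unmasking: predict all masked tokens at once, commit the most confident predictions on a schedule, and re-predict the rest. The code below is a minimal sketch assuming a `model(tokens, text_emb)` interface returning per-position logits and a cosine masking schedule; both are our assumptions rather than Muse's exact procedure.

```python
import math
import torch

@torch.no_grad()
def parallel_decode(model, text_emb, seq_len, mask_id, steps=12):
    """Confidence-based parallel decoding of image tokens (illustrative)."""
    tokens = torch.full((seq_len,), mask_id, dtype=torch.long)
    for s in range(1, steps + 1):
        logits = model(tokens, text_emb)              # (seq_len, vocab)
        probs, preds = logits.softmax(dim=-1).max(dim=-1)
        still_masked = tokens == mask_id
        # Cosine schedule: fraction of tokens that should remain masked.
        frac_masked = math.cos(math.pi / 2 * s / steps)
        n_unmask = int(still_masked.sum()) - int(seq_len * frac_masked)
        # Only currently masked positions compete; commit the most confident.
        conf = probs.masked_fill(~still_masked, -1.0)
        idx = conf.topk(max(n_unmask, 1)).indices
        tokens[idx] = preds[idx]
    return tokens
```

Because every step fills many positions in parallel, generation takes a dozen or so forward passes instead of one per token, which is the efficiency gain over autoregressive decoding the abstract describes.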
Learning the underlying distribution of molecular graphs and generating high-fidelity samples is a fundamental research problem in drug discovery and material science. However, accurately modeling the distribution and rapidly generating novel molecular graphs remain crucial and challenging goals. To accomplish these goals, we propose a novel Conditional Diffusion model based on discrete Graph Structures (CDGS) for molecular graph generation. Specifically, we construct a forward graph diffusion process on both graph structures and inherent features through stochastic differential equations (SDEs) and derive discrete graph structures as the condition for the reverse generative process. We present a specialized hybrid graph noise prediction model that extracts the global context and the local node-edge dependency from intermediate graph states. We further utilize ordinary differential equation (ODE) solvers for efficient graph sampling, based on the semi-linear structure of the probability flow ODE. Experiments on diverse datasets validate the effectiveness of our framework. Particularly, the proposed method still generates high-quality molecular graphs in a limited number of steps.
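For the ODE-based sampling step, a minimal sketch follows: plain Euler integration of the probability-flow ODE for a VP-type SDE, with the trained noise-prediction model standing in as `score_fn`. The linear beta schedule, the Euler solver (the paper's semi-linear solvers are more refined), and the treatment of the graph state as a plain tensor are simplifying assumptions.

```python
import torch

@torch.no_grad()
def pf_ode_sample(score_fn, x_T, t_grid):
    """Euler steps on the probability-flow ODE of a VP SDE (illustrative).

    dx = [f(x, t) - 0.5 * g(t)^2 * score(x, t)] dt, with
    f(x, t) = -0.5 * beta(t) * x and g(t)^2 = beta(t).
    t_grid should run from 1.0 down to a small epsilon.
    """
    def beta(t):                       # linear VP schedule (assumption)
        return 0.1 + (20.0 - 0.1) * t

    x = x_T
    for t0, t1 in zip(t_grid[:-1], t_grid[1:]):
        dt = t1 - t0                   # negative: integrating backward in time
        drift = -0.5 * beta(t0) * (x + score_fn(x, t0))
        x = x + drift * dt
    return x
```

The deterministic probability-flow trajectory is what lets a handful of solver steps replace the hundreds of stochastic steps of ancestral sampling, matching the abstract's claim of high-quality graphs in a limited number of steps.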